
    Fast non-negative deconvolution for spike train inference from population calcium imaging

    Calcium imaging for observing spiking activity from large populations of neurons is quickly gaining popularity. While the raw data are fluorescence movies, the underlying spike trains are of interest. This work presents a fast non-negative deconvolution filter to infer the approximately most likely spike train for each neuron, given the fluorescence observations. This algorithm outperforms optimal linear deconvolution (Wiener filtering) on both simulated and biological data. The performance gains come from restricting the inferred spike trains to be positive (using an interior-point method), unlike the Wiener filter. The algorithm is fast enough that even when imaging over 100 neurons, inference can be performed on the set of all observed traces faster than real time. Performing optimal spatial filtering on the images further refines the estimates. Importantly, all the parameters required to perform the inference can be estimated using only the fluorescence data, obviating the need to perform joint electrophysiological and imaging calibration experiments. Comment: 22 pages, 10 figures
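    The core idea — deconvolving a fluorescence trace under a non-negativity constraint — can be sketched in a few lines. Here a spike train is convolved with an exponential calcium-decay kernel, noise is added, and the spikes are recovered by non-negative least squares (SciPy's `nnls` stands in for the paper's interior-point solver; all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import nnls
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
T, tau = 200, 10.0
spikes = (rng.random(T) < 0.05).astype(float)   # hypothetical ground-truth spike train
kernel = np.exp(-np.arange(T) / tau)            # exponential calcium-decay kernel
A = toeplitz(kernel, np.zeros(T))               # causal convolution matrix
fluor = A @ spikes + 0.05 * rng.standard_normal(T)

# non-negative deconvolution: minimise ||A s - F||^2 subject to s >= 0
s_hat, _ = nnls(A, fluor)
```

    A Wiener filter would instead solve the unconstrained least-squares problem, which spreads energy into negative "spikes"; the constraint is what sharpens the estimate.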

    Testing probability distributions underlying aggregated data

    In this paper, we analyze and study a hybrid model for testing and learning probability distributions. Here, in addition to samples, the testing algorithm is provided with one of two different types of oracles to the unknown distribution D over [n]. More precisely, we define both the dual and cumulative dual access models, in which the algorithm A can both sample from D and, respectively, for any i ∈ [n], query the probability mass D(i) (query access), or get the total mass of {1, …, i}, i.e. ∑_{j=1}^{i} D(j) (cumulative access). These two models, by generalizing the previously studied sampling and query oracle models, allow us to bypass the strong lower bounds established for a number of problems in these settings, while capturing several interesting aspects of these problems -- and providing new insight on the limitations of the models. Finally, we show that while the testing algorithms can in most cases be strictly more efficient, some tasks remain hard even with this additional power.
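    The two access types can be made concrete with a small oracle class: standard sampling, pointwise mass queries D(i), and prefix-mass queries ∑_{j≤i} D(j). This is a toy illustration, not code from the paper; class and method names are made up:

```python
import bisect
import random

class DualOracle:
    """Toy oracle granting dual and cumulative-dual access to a
    distribution D over [n] (all names here are illustrative)."""

    def __init__(self, probs, seed=0):
        assert abs(sum(probs) - 1.0) < 1e-9
        self.probs = list(probs)
        self.cum, total = [], 0.0
        for p in probs:
            total += p
            self.cum.append(total)
        self.rng = random.Random(seed)

    def sample(self):
        """Standard sampling access: draw i ~ D (1-indexed)."""
        return bisect.bisect_left(self.cum, self.rng.random()) + 1

    def query(self, i):
        """Dual access: the probability mass D(i)."""
        return self.probs[i - 1]

    def cquery(self, i):
        """Cumulative dual access: the prefix mass sum_{j=1}^{i} D(j)."""
        return self.cum[i - 1]
```

    A cumulative query is what enables, e.g., binary search for quantiles of D — the kind of operation that sidesteps sampling-only lower bounds.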

    A point process framework for modeling electrical stimulation of the auditory nerve

    Model-based studies of auditory nerve responses to electrical stimulation can provide insight into the functioning of cochlear implants. Ideally, these studies can identify limitations in sound processing strategies and lead to improved methods for providing sound information to cochlear implant users. To accomplish this, models must accurately describe auditory nerve spiking while avoiding excessive complexity that would preclude large-scale simulations of populations of auditory nerve fibers and obscure insight into the mechanisms that influence neural encoding of sound information. In this spirit, we develop a point process model of the auditory nerve that provides a compact and accurate description of neural responses to electric stimulation. Inspired by the framework of generalized linear models, the proposed model consists of a cascade of linear and nonlinear stages. We show how each of these stages can be associated with biophysical mechanisms and related to models of neuronal dynamics. Moreover, we derive a semi-analytical procedure that uniquely determines each parameter in the model on the basis of fundamental statistics from recordings of single fiber responses to electric stimulation, including threshold, relative spread, jitter, and chronaxie. The model also accounts for refractory and summation effects that influence the responses of auditory nerve fibers to high pulse rate stimulation. Throughout, we compare model predictions to published physiological data and explain differences in auditory nerve responses to high and low pulse rate stimulation. We close by performing an ideal observer analysis of simulated spike trains in response to sinusoidally amplitude modulated stimuli and find that carrier pulse rate does not affect modulation detection thresholds. Comment: 1 title page, 27 manuscript pages, 14 figures, 1 table, 1 appendix
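    A minimal sketch of such a cascade, assuming an integrated-Gaussian nonlinearity parameterized by threshold and relative spread, followed by an absolute refractory period (the paper's actual stages are richer; the function and its defaults here are illustrative):

```python
import math
import random

def simulate_fiber(currents, threshold=1.0, rs=0.1, t_ref=5, seed=0):
    """Toy linear/nonlinear cascade for per-pulse fiber spiking:
    a Gaussian-CDF nonlinearity maps each pulse current to a firing
    probability (rs = relative spread), then an absolute refractory
    period of t_ref pulses suppresses firing. Values are illustrative."""
    rng = random.Random(seed)
    spikes, last = [], -10**9
    for t, current in enumerate(currents):
        if t - last < t_ref:
            spikes.append(0)          # refractory: fiber cannot fire
            continue
        z = (current - threshold) / (rs * threshold)
        p = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # Gaussian CDF
        fired = 1 if rng.random() < p else 0
        if fired:
            last = t
        spikes.append(fired)
    return spikes
```

    Driving the fiber with a high-rate, suprathreshold pulse train shows the refractory limit directly: spikes land every t_ref pulses regardless of the pulse rate.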

    Factorization of natural 4 × 4 patch distributions

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-30212-4_15 (Revised and Selected Papers of ECCV 2004 Workshop SMVP 2004, Prague, Czech Republic, May 16, 2004).
    The lack of sufficient machine-readable images makes the direct computation of natural-image 4 × 4 block statistics impossible, and one has to resort to indirect approximated methods to reduce their domain space. A natural approach to this is to collect statistics over compressed images; if the reconstruction quality is good enough, these statistics will be sufficiently representative. However, a requirement for easier statistics collection is that the method used provides a uniform representation of the compression information across all patches, something for which codebook techniques are well suited. We shall follow this approach here, using a fractal compression–inspired quantization scheme to approximate a given patch B by a triplet (D_B, μ_B, σ_B), with σ_B the patch's contrast, μ_B its brightness and D_B a codebook approximation to the mean–variance normalization (B − μ_B)/σ_B of B. The resulting reduction of the domain space makes feasible the computation of entropy and mutual information estimates that, in turn, suggest a factorization of the approximation of p(B) ≃ p(D_B, μ_B, σ_B) as p(D_B, μ_B, σ_B) ≃ p(D_B)p(μ)p(σ)Φ(||∇||), with Φ being a high contrast correction. With partial support of Spain's CICyT, TIC 01–57.
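    The triplet decomposition itself is simple to sketch: split a patch into its brightness and contrast, normalize, and snap the normalized patch to the nearest codebook entry. This is an illustrative reimplementation (the codebook construction is assumed given, and all names are made up):

```python
import numpy as np

def patch_triplet(B, codebook):
    """Approximate a patch B by (D_B, mu_B, sigma_B): brightness mu_B,
    contrast sigma_B, and the codebook entry D_B closest (in Euclidean
    distance) to the mean-variance normalization (B - mu_B) / sigma_B.
    Illustrative sketch; the codebook itself is assumed given."""
    mu = B.mean()
    sigma = B.std()
    Z = (B - mu) / sigma if sigma > 0 else np.zeros_like(B)
    dists = [np.linalg.norm(Z - C) for C in codebook]
    return codebook[int(np.argmin(dists))], mu, sigma
```

    The patch is then reconstructed as σ_B · D_B + μ_B, so the domain space shrinks to a codebook index plus two scalars — small enough for entropy and mutual-information estimation.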

    A Generalized Linear Model for Estimating Spectrotemporal Receptive Fields from Responses to Natural Sounds

    In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
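    The generative side of such a GLM — a conditional intensity built from a stimulus filter plus a post-spike filter, passed through an exponential nonlinearity — can be sketched as a forward simulation (a 1-D signal stands in for the spectrogram, and all filter values are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
x = rng.standard_normal(T)            # 1-D stand-in for the spectrogram
k = np.array([0.8, 0.4, 0.2])         # stimulus filter (toy STRF)
h = np.array([-5.0, -2.0, -0.5])      # post-spike filter: refractoriness
b = -2.0                              # baseline log firing rate

spikes = np.zeros(T, dtype=int)
for t in range(T):
    drive = b
    for lag in range(len(k)):                    # stimulus contribution
        if t - lag >= 0:
            drive += k[lag] * x[t - lag]
    for lag in range(1, len(h) + 1):             # spike-history contribution
        if t - lag >= 0:
            drive += h[lag - 1] * spikes[t - lag]
    rate = np.exp(drive)              # conditional intensity for this bin
    spikes[t] = int(rng.random() < min(rate, 1.0))
```

    Because the log-intensity is linear in (k, h, b), fitting by (penalized) maximum likelihood is a concave problem, which is what makes the GLM approach tractable for STRF estimation.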

    Density-dependence of functional development in spiking cortical networks grown in vitro

    During development, the mammalian brain differentiates into specialized regions with distinct functional abilities. While many factors contribute to functional specialization, we explore the effect of neuronal density on the development of neuronal interactions in vitro. Two types of cortical networks, dense and sparse, with 50,000 and 12,000 total cells respectively, are studied. Activation graphs that represent pairwise neuronal interactions are constructed using a competitive first response model. These graphs reveal that, during development in vitro, dense networks form activation connections earlier than sparse networks. Link entropy analysis of dense network activation graphs suggests that the majority of connections between electrodes are reciprocal in nature. Information theoretic measures reveal that early functional information interactions (among 3 cells) are synergetic in both dense and sparse networks. However, during later stages of development, previously synergetic relationships become primarily redundant in dense, but not in sparse networks. Large link entropy values in the activation graph are related to the domination of redundant ensembles in late stages of development in dense networks. Results demonstrate differences between dense and sparse networks in terms of informational groups, pairwise relationships, and activation graphs. These differences suggest that variations in cell density may result in different functional specialization of nervous system tissue in vivo. Comment: 10 pages, 7 figures
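    The synergy-versus-redundancy distinction for a triple of cells can be illustrated with a common multivariate-information quantity, I(X,Y;Z) − I(X;Z) − I(Y;Z) (the paper's exact measure may differ; this is a generic sketch):

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

def mutual_info(a, b):
    """I(A;B) = H(A) + H(B) - H(A,B), estimated from paired samples."""
    return entropy(a) + entropy(b) - entropy(list(zip(a, b)))

def triple_synergy(xs, ys, zs):
    """I(X,Y;Z) - I(X;Z) - I(Y;Z): positive for synergetic triples,
    negative for redundant ones."""
    return (mutual_info(list(zip(xs, ys)), zs)
            - mutual_info(xs, zs) - mutual_info(ys, zs))
```

    An XOR-like triple (Z predictable only from X and Y jointly) scores +1 bit, while a fully duplicated triple scores −1 bit — the same qualitative shift the abstract describes between early and late development in dense networks.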

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive expressions relating the change of the gain, with respect to both mean and variance, to the receptive fields derived from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. Comment: 24 pages, 4 figures, 1 supporting information
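    The basic phenomenon — noise variance divisively scaling the slope of the f-I curve — can be seen in a one-line stand-in model: the firing rate as the fraction of a Gaussian input exceeding a fixed threshold (a toy model, not the paper's conductance-based neurons; names and values are illustrative):

```python
import math

def firing_rate(mu, sigma, theta=1.0):
    """Stand-in f-I curve: fraction of a Gaussian input with mean mu and
    standard deviation sigma that exceeds threshold theta."""
    return 0.5 * (1.0 + math.erf((mu - theta) / (sigma * math.sqrt(2))))

def gain(mu, sigma, d=1e-4):
    """Slope of the f-I curve at mean current mu (central difference)."""
    return (firing_rate(mu + d, sigma) - firing_rate(mu - d, sigma)) / (2 * d)
```

    At threshold the slope is 1/(σ√(2π)), so increasing the input variance flattens the curve — the signature of noise-mediated gain control the abstract connects to adaptive coding.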